    Understanding computer security

    Few things in society and everyday life have changed in the last 10 years as much as the concept of security. From bank robberies to wars, what used to imply a great deal of violence is now silently happening on the Internet. Perhaps more strikingly, the very idea of privacy – a concept closely related to that of individual freedom – is undergoing such a profound revolution that people are suddenly unable to make rational and informed decisions: we protested against the introduction of RFID tags (Kelly and Erickson, 2005; Lee and Kim, 2006), and now we give away en masse most of our private information by subscribing to services (social media, free apps, cloud services) whose very reason for existence is the commerce of intimate personal data. The ICT revolution has changed the game, and the security paradigms that were suitable for people and systems up until 10 years ago are now obsolete. It seems we do not know what to replace them with. As of today, we keep patching systems, but we do not understand how to make them reasonably secure (Rice, 2007); perhaps more importantly, we do not understand what reasonable privacy guarantees are for human beings, let alone how to enforce them. We do not understand how to combine accountability and freedom in this new world, in which firewalls and digital perimeters can no longer guarantee security and privacy. We believe that the root of the challenge we face is understanding security and how information technology can enable and support such an understanding. And just as security is a broad, multidisciplinary topic covering technical as well as non-technical issues, the challenge of understanding security is a multifaceted one, spanning a myriad of noteworthy topics. Here, we mention just three that we consider particularly important.

    GEM : a distributed goal evaluation algorithm for trust management

    Trust Management (TM) is an approach to distributed access control where access decisions are based on policy statements issued by multiple principals and stored in a distributed manner. Most existing goal evaluation algorithms for TM either rely on a centralized evaluation strategy, which consists of collecting all the relevant policy statements in a single location (and therefore do not guarantee the confidentiality of intensional policies), or do not detect the termination of the computation (i.e., when all the answers to a goal have been computed). In this paper we present GEM, a distributed goal evaluation algorithm for TM systems. GEM detects termination in a completely distributed way without disclosing intensional policies, thereby preserving their confidentiality. We demonstrate that the algorithm terminates and is sound and complete w.r.t. the standard semantics for logic programs.
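
    To make the notion of goal evaluation concrete, the sketch below performs centralized, top-down evaluation over Datalog-like policy statements, i.e. the setting GEM improves upon. All predicate and principal names are illustrative, and GEM's distributed evaluation and termination detection are not reproduced here.

        # Minimal centralized goal evaluation over Datalog-like policy statements.
        # Shown only to illustrate what "goal evaluation" means in trust management;
        # GEM evaluates goals distributively and detects termination without
        # collecting the statements in one place, which this sketch does not do.

        # A policy statement: the head atom holds if every body atom holds.
        # Atoms are (predicate, principal) pairs; all names are illustrative.
        POLICIES = [
            (("member", "alice"), []),                                     # fact
            (("member", "bob"), []),                                       # fact
            (("access", "alice"), [("member", "alice")]),                  # rule
            (("access", "bob"), [("member", "bob"), ("cleared", "bob")]),  # rule
        ]

        def evaluate(goal, policies, seen=frozenset()):
            """Return True if the goal atom is derivable from the policy statements."""
            if goal in seen:                       # avoid looping on recursive policies
                return False
            for head, body in policies:
                if head == goal and all(evaluate(atom, policies, seen | {goal})
                                        for atom in body):
                    return True
            return False

        print(evaluate(("access", "alice"), POLICIES))  # True
        print(evaluate(("access", "bob"), POLICIES))    # False: "cleared" is not derivable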

    Requirements engineering within a large-scale security-oriented research project : lessons learned

    Requirements engineering has been recognized as a fundamental phase of the software engineering process. Nevertheless, the elicitation and analysis of requirements are often left aside in favor of architecture-driven software development. This tendency, however, can lead to issues that may affect the success of a project. This paper presents the experience we gained in the elicitation and analysis of requirements in a large-scale security-oriented European research project, which was originally conceived as an architecture-driven project. In particular, we illustrate the challenges that can be faced in large-scale research projects and consider the applicability of existing best practices and off-the-shelf methodologies with respect to the needs of such projects. We then discuss how those practices and methods can be integrated into the requirements engineering process, and possibly improved, to address the identified challenges. Finally, we summarize the lessons learned from our experience and the benefits that a proper requirements analysis can bring to a project.

    Learning predictive cognitive maps with spiking neurons during behaviour and replays

    The hippocampus has been proposed to encode environments using a representation that contains predictive information about likely future states, called the successor representation. However, it is not clear how such a representation could be learned in the hippocampal circuit. Here, we propose a plasticity rule that can learn this predictive map of the environment using a spiking neural network. We connect this biologically plausible plasticity rule to reinforcement learning, showing mathematically and numerically that it implements the TD-lambda algorithm. By spanning these different levels, we show how our framework naturally encompasses behavioral activity and replays, smoothly moving from rate to temporal coding, and allows learning over behavioral timescales with a plasticity rule acting on a timescale of milliseconds. We discuss how biological parameters such as dwelling times at states, neuronal firing rates, and neuromodulation relate to the delay-discounting parameter of the TD algorithm, and how they influence the learned representation. We also find that, in agreement with psychological studies and contrary to reinforcement learning theory, the discount factor decreases hyperbolically with time. Finally, our framework suggests a role for replays in both aiding learning in novel environments and finding shortcut trajectories that were not experienced during behavior, in agreement with experimental data.
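
    As an illustration of the reinforcement-learning level of this framework (not the spiking network itself), the sketch below learns a tabular successor representation with a standard TD(lambda) update on a small linear track; all parameter values and the random-walk example are illustrative.

        import numpy as np

        def learn_successor_representation(trajectory, n_states, gamma=0.9, lam=0.8, alpha=0.1):
            """Tabular TD(lambda) update of the successor representation M, where
            M[s, s2] estimates the discounted expected future occupancy of state s2
            when starting from state s."""
            M = np.eye(n_states)        # each state initially predicts only itself
            e = np.zeros(n_states)      # eligibility trace over predecessor states
            for s, s_next in zip(trajectory[:-1], trajectory[1:]):
                e = gamma * lam * e     # decay existing traces
                e[s] += 1.0             # mark the current state as eligible
                # one-hot occupancy of the current state plus the discounted prediction
                # from the next state, compared against the current prediction
                td_error = np.eye(n_states)[s] + gamma * M[s_next] - M[s]
                M += alpha * np.outer(e, td_error)   # credit all eligible predecessors
            return M

        # Example: a random walk on a 5-state linear track
        rng = np.random.default_rng(0)
        traj = [0]
        for _ in range(500):
            traj.append(min(max(traj[-1] + rng.choice([-1, 1]), 0), 4))
        print(np.round(learn_successor_representation(traj, n_states=5), 2))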

    Overview and commentary of the CDEI's extended roadmap to an effective AI assurance ecosystem

    In recent years, the field of ethical artificial intelligence (AI), or AI ethics, has gained traction and aims to develop guidelines and best practices for the responsible and ethical use of AI across sectors. As part of this, nations have proposed AI strategies, with the UK releasing both national AI and data strategies, as well as a transparency standard. Extending these efforts, the Centre for Data Ethics and Innovation (CDEI) has published an AI Assurance Roadmap, which is the first of its kind and provides guidance on how to manage the risks that come from the use of AI. In this article, we provide an overview of the document's vision for a “mature AI assurance ecosystem” and of how the CDEI will work with other organizations on the development of regulation and industry standards and on the creation of AI assurance practitioners. We also provide a commentary on some key themes identified in the CDEI's roadmap in relation to (i) the complexities of building “justified trust”, (ii) the role of research in AI assurance, (iii) the current developments in the AI assurance industry, and (iv) convergence with international regulation.

    The functional role of sequentially neuromodulated synaptic plasticity in behavioural learning.

    To survive, animals have to quickly modify their behaviour when the reward changes. The internal representations responsible for this are updated through synaptic weight changes, mediated by certain neuromodulators conveying feedback from the environment. In previous experiments, we discovered a form of hippocampal spike-timing-dependent plasticity (STDP) that is sequentially modulated by acetylcholine and dopamine. Acetylcholine facilitates synaptic depression, while dopamine retroactively converts the depression into potentiation. When these experimental findings were implemented as a learning rule in a computational model, our simulations showed that cholinergically facilitated depression is important for reversal learning. In the present study, we tested the model's prediction by optogenetically inactivating cholinergic neurons in mice during a hippocampus-dependent spatial learning task with changing rewards. We found that reversal learning, but not initial place learning, was impaired, verifying our computational prediction that acetylcholine-modulated plasticity promotes the unlearning of old reward locations. Further, differences in neuromodulator concentrations in the model captured mouse-by-mouse performance variability in the optogenetic experiments. Our line of work sheds light on how neuromodulators enable the learning of new contingencies.
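
    The toy rule below is only a schematic illustration, under our own simplifying assumptions, of the sequential neuromodulation described above: acetylcholine scales the depression induced by pre/post pairings and leaves an eligibility tag, and a later dopamine signal converts the tagged depression into net potentiation. It is not the authors' spiking model, and all parameter values are invented for illustration.

        def update_synapse(w, pairings, ach_level, dopamine_later,
                           a_depress=0.01, tag_decay=0.9):
            """Toy sequentially neuromodulated plasticity rule (illustrative only):
            acetylcholine facilitates depression and leaves a synaptic tag; a later
            dopamine signal retroactively converts the tagged depression into potentiation."""
            tag = 0.0
            for _ in range(pairings):
                depression = a_depress * ach_level   # ACh scales the depression amplitude
                w -= depression
                tag = tag_decay * tag + depression   # remember recent depression events
            if dopamine_later:
                w += 2.0 * tag                       # undo the depression and potentiate
            return w

        w0 = 0.5
        print(update_synapse(w0, pairings=10, ach_level=1.0, dopamine_later=True))   # net potentiation
        print(update_synapse(w0, pairings=10, ach_level=1.0, dopamine_later=False))  # net depression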

    Measuring privacy compliance with process specifications

    Enforcement relies on the idea that infringements are violations and, as such, should not be allowed. However, this notion is very restrictive and cannot be applied in unpredictable domains like healthcare. To address this issue, we need conformance metrics for detecting and quantifying infringements of policies and procedures. Existing metrics, however, usually consider every deviation from the specification equally, making them inadequate for measuring the severity of infringements. In this paper, we identify a number of factors that can be used to quantify deviations from process specifications. These factors drive the definition of metrics that allow for a more accurate measurement of privacy infringements. We demonstrate how the proposed approach can be adopted to enhance existing conformance metrics through a case study on the provisioning of healthcare treatment.
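
    As a rough sketch of the kind of metric argued for here, the snippet below weights different deviation types by a severity factor before aggregating them over an execution trace. The deviation kinds, weights, and normalization are invented for illustration and are not those of the paper.

        from dataclasses import dataclass

        @dataclass
        class Deviation:
            """A deviation of an observed execution trace from the process specification."""
            kind: str    # e.g. "skipped_step", "extra_disclosure", "wrong_role" (illustrative)
            count: int

        # Hypothetical severity weights: not every deviation is equally serious.
        SEVERITY = {"skipped_step": 1.0, "extra_disclosure": 5.0, "wrong_role": 3.0}

        def infringement_score(deviations, trace_length):
            """Severity-weighted conformance metric: 0 means full compliance,
            larger values indicate more serious privacy infringements."""
            total = sum(SEVERITY.get(d.kind, 1.0) * d.count for d in deviations)
            return total / max(trace_length, 1)

        devs = [Deviation("skipped_step", 2), Deviation("extra_disclosure", 1)]
        print(infringement_score(devs, trace_length=20))   # -> 0.35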